    Does venture capital pay off? a meta-analysis on the relationship between venture capital involvement and firm performance

    Venture capital (VC) as an alternative to mainstream corporate finance (Wright and Robbie, 1998) has attracted a large amount of interest in academic research and among practitioners. One of the main questions is whether VC adds value to firms. Yet, empirical research results are highly inconsistent. Venture capitalists not only provide capital and monitoring, but also actively assist firms with industry-specific knowledge and skills (MacMillan et al., 1989). Furthermore, they increase the legitimacy of entrepreneurial firms (Zimmerman & Zeitz, 2002). On the other hand, venture capitalists may pressure firms into an initial public offering (IPO) at a premature stage of their life cycle (Gompers, 1996). The high costs associated with an IPO may, in turn, decrease profitability and even endanger the survival of firms. Whether venture capital really pays off thus largely depends on contextual factors. The aim of this study is to provide a review and synthesis of existing empirical research on the relationship between VC and firm performance. Specifically, we intend to answer two research questions: (1) Does VC increase the performance of firms? (2) Which variables moderate this relationship?
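A standard building block of such a meta-analysis is inverse-variance pooling of per-study effect sizes. The sketch below is a minimal fixed-effect illustration with hypothetical correlations and variances; it is not the authors' actual procedure, which would typically also include random-effects and moderator models.

```python
# Fixed-effect meta-analysis: inverse-variance weighted pooling of effect sizes.
# All study effects and variances below are hypothetical, for illustration only.
def combine_effects(effects, variances):
    """Return the pooled effect size and its variance."""
    weights = [1.0 / v for v in variances]          # precision of each study
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_var = 1.0 / sum(weights)                 # variance of the pooled estimate
    return pooled, pooled_var

# Hypothetical per-study correlations (VC involvement vs. firm performance)
effects = [0.12, -0.05, 0.20, 0.08]
variances = [0.01, 0.02, 0.015, 0.005]
pooled, pooled_var = combine_effects(effects, variances)
```

Studies with smaller sampling variance receive proportionally larger weight, so the pooled estimate is dominated by the most precise studies.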

    Convex integration for Lipschitz mappings and counterexamples to regularity

    We study Lipschitz solutions of partial differential relations ∇u ∈ K, where u is a vector-valued function on an open subset of R^n. In some cases the set of solutions turns out to be surprisingly large. The general theory is then used to construct counterexamples to regularity of solutions of Euler-Lagrange systems satisfying classical ellipticity conditions. Comment: 28 pages, published version

    The attainable superconducting Tc in a model of phase coherence by percolation

    The onset of macroscopic phase coherence in superconducting cuprates is considered to be determined by random percolation between mesoscopic Jahn-Teller pairs, stripes or clusters. The model is found to predict the onset of superconductivity near 6% doping, a maximum Tc near 15% doping and Tc = T* at optimum doping, and accounts for the destruction of superconductivity by Zn doping near 7%. The model also predicts a relation between the pairing (pseudogap) energy and Tc in terms of experimentally measurable quantities. Comment: 3 pages + 3 postscript figures
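The percolation mechanism invoked here can be illustrated generically: above a critical occupation fraction, randomly occupied sites suddenly form a system-spanning cluster. The toy Monte Carlo below estimates the spanning probability on a small square lattice; the lattice, occupation probabilities, and sizes are hypothetical and stand in only for the generic percolation idea, not the actual cuprate model.

```python
import random

# Toy site percolation: estimate the probability that randomly occupied sites
# on an n x n lattice form a top-to-bottom spanning cluster.
def spans(grid):
    """Depth-first search from the top row; True if the bottom row is reached."""
    n = len(grid)
    stack = [(0, c) for c in range(n) if grid[0][c]]
    seen = set(stack)
    while stack:
        r, c = stack.pop()
        if r == n - 1:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and grid[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return False

def spanning_prob(n=16, p=0.6, trials=200, seed=1):
    """Monte Carlo estimate of the spanning probability at occupation fraction p."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials

p_low = spanning_prob(p=0.3)   # well below the 2D site threshold (~0.593)
p_high = spanning_prob(p=0.8)  # well above it
```

The sharp jump in spanning probability between the two occupation fractions is the generic analogue of a percolation-driven onset of macroscopic coherence.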

    Prospects for Higgs Boson Searches in the Channel WH -> lnbb

    We present a method to detect the WH -> lnbb channel in the high-luminosity LHC environment with the CMS detector. This study is performed with a fast detector response simulation including high-luminosity event pile-up. The main aspects of the reconstruction are pile-up jet rejection, identification of b-jets and improvement of the Higgs mass resolution. The detection potential in the SM for m(H) < 130 GeV and in the MSSM is encouraging only for high integrated luminosity. Nevertheless, it is possible to extract important Higgs parameters which are useful to elucidate the nature of the Higgs sector. In combination with other channels, this channel provides valuable information on Higgs boson couplings. Comment: 8 pages, 8 figures

    Searching for Higgs Bosons in Association with Top Quark Pairs in the H -> bb Decay Mode

    The search for the Higgs boson is one of the prime goals of the LHC. Higgs bosons lighter than 130 GeV decay mainly to a b-quark pair. While the detection of a directly produced Higgs boson in the bb channel is impossible because of the huge QCD background, the channel ttH -> lnqqbbbb is very promising in the Standard Model and the MSSM. We discuss an event reconstruction and selection method based on likelihood functions. The CMS detector response is modelled with parametrisations obtained from detailed simulations. Various physics and detector performance scenarios are investigated and the results are presented. It turns out that excellent b-tagging performance and good mass resolution are essential for this channel. Comment: 10 pages, 6 figures
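Likelihood-based event selection of the kind mentioned above can be sketched as scoring each event by the ratio of signal to background probability densities and keeping events above a cut. The Gaussian shapes, means, widths, and masses below are invented for illustration and are not taken from the paper.

```python
import math

# Likelihood-ratio event selection sketch. The signal is modelled as a narrow
# Gaussian in reconstructed bb mass, the background as a broad one; all
# parameters are hypothetical.
def gauss(x, mu, sigma):
    """Normal probability density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(mass, mu_sig=115.0, sig_sig=15.0, mu_bkg=80.0, sig_bkg=40.0):
    """Signal-to-background density ratio for one event."""
    return gauss(mass, mu_sig, sig_sig) / gauss(mass, mu_bkg, sig_bkg)

events = [60.0, 115.0, 170.0]   # hypothetical reconstructed bb masses (GeV)
selected = [m for m in events if likelihood_ratio(m) > 1.0]
```

A real analysis would multiply such ratios over several discriminating variables (b-tag scores, kinematics) rather than use the mass alone, but the cut-on-ratio structure is the same.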

    Void Scaling and Void Profiles in CDM Models

    An analysis of voids using cosmological N-body simulations of cold dark matter models is presented. It employs a robust void statistic that was recently applied to discriminate between data from the Las Campanas Redshift Survey (LCRS) and different cosmological models. Here we extend the analysis to 3D and show that typical void sizes D in the simulated galaxy samples obey a linear scaling relation with the mean galaxy separation lambda: D = D_0 + nu*lambda. It has the same slope nu as in 2D, but with smaller absolute void sizes. The scaling relation is able to discriminate between different cosmologies. For the best standard LCDM model, the slope of the scaling relation for voids in the dark matter halos is too steep compared to the LCRS, with too small void sizes for well-sampled data sets. The scaling relation of voids for dark matter halos with increasing mass thresholds is even steeper than that for samples of galaxy-mass halos where we sparse-sample the data. This shows the stronger clustering of more massive halos. Further, we find a correlation of the void size with its central and environmental average density. While there is little sign of evolution in samples of small DM halos with v_{circ} ~ 90 km/s, voids in halos with circular velocity over 200 km/s are larger at redshift z = 3 due to the smaller halo number density. The flow of dark matter from underdense to overdense regions in an early established network of large-scale structure is also imprinted in the evolution of the density profiles, with a relative density decrease in void centers of 0.18 per redshift unit between z=3 and z=0. Comment: 12 pages, 9 eps figures, submitted to MNRAS
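The scaling relation D = D_0 + nu*lambda quoted above can be recovered from (lambda, D) measurements with an ordinary least-squares fit. The sketch below uses hypothetical data points chosen so the fit is exact; the actual slope and intercept values come from the simulations, not from this example.

```python
# Least-squares fit of the void scaling relation D = D_0 + nu * lambda.
# The (lambda, D) pairs are hypothetical, chosen only to illustrate the fit.
def fit_scaling(lams, Ds):
    """Return intercept D_0 and slope nu of the best-fit line."""
    n = len(lams)
    mean_l = sum(lams) / n
    mean_D = sum(Ds) / n
    cov = sum((l - mean_l) * (d - mean_D) for l, d in zip(lams, Ds))
    var = sum((l - mean_l) ** 2 for l in lams)
    nu = cov / var                 # slope: change in void size per unit separation
    D0 = mean_D - nu * mean_l      # intercept
    return D0, nu

lams = [2.0, 4.0, 6.0, 8.0]    # hypothetical mean galaxy separations
Ds = [6.0, 10.0, 14.0, 18.0]   # hypothetical typical void sizes
D0, nu = fit_scaling(lams, Ds)
```

Comparing the fitted slope nu between observed and simulated samples is what allows the statistic to discriminate between cosmologies.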

    A differential memristive synapse circuit for on-line learning in neuromorphic computing systems

    Spike-based learning with memristive devices in neuromorphic computing architectures typically uses learning circuits that require overlapping pulses from pre- and post-synaptic nodes. This imposes severe constraints on the length of the pulses transmitted in the network, and on the network's throughput. Furthermore, most of these circuits do not decouple the currents flowing through the memristive devices from those used to stimulate the target neuron. This can be a problem when using devices with high conductance values, because of the resulting large currents. In this paper we propose a novel circuit that decouples the current produced by the memristive device from the one used to stimulate the post-synaptic neuron, using a novel differential scheme based on the Gilbert normalizer circuit. We show how this circuit is useful for reducing the effect of variability in the memristive devices, and how it is ideally suited for spike-based learning mechanisms that do not require overlapping pre- and post-synaptic pulses. We demonstrate the features of the proposed synapse circuit with SPICE simulations, and validate its learning properties with high-level behavioral network simulations which use a stochastic gradient descent learning rule in two classification tasks. Comment: 18 pages main text, 9 pages of supplementary text, 19 figures. Patented
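A stochastic gradient descent learning rule of the kind used in the behavioral network simulations can be illustrated with a minimal logistic-regression example. The toy data, learning rate, and epoch count below are hypothetical stand-ins; the paper's simulations operate on memristive synapse models, not on this abstract classifier.

```python
import math
import random

# Stochastic gradient descent for logistic regression on a toy 2-class task:
# one sample at a time, weights nudged down the gradient of the log-loss.
def sgd_train(data, labels, lr=0.5, epochs=200, seed=0):
    rng = random.Random(seed)
    w = [0.0, 0.0]
    b = 0.0
    idx = list(range(len(data)))
    for _ in range(epochs):
        rng.shuffle(idx)                       # visit samples in random order
        for i in idx:
            x, y = data[i], labels[i]
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))     # predicted probability of class 1
            err = p - y                        # gradient of log-loss w.r.t. z
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            b -= lr * err
    return w, b

# Linearly separable toy data: class 1 roughly where x0 + x1 is large
data = [(0.0, 0.0), (1.0, 1.0), (0.2, 0.1), (0.9, 0.8)]
labels = [0, 1, 0, 1]
w, b = sgd_train(data, labels)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x in data]
```

The per-sample weight update is the part that maps onto a synapse circuit: each presentation of an input produces a small, locally computed conductance change.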